Annolid Bot Provider and Model Setup Tutorial
This tutorial shows how to configure Annolid Bot providers, agent models, local models, and search keys.
1. Where Annolid reads settings
Annolid resolves provider/model settings from:
Environment variables (recommended for secrets).
~/.annolid/llm_settings.json for non-secret config.
Profile overrides in profiles.
Annolid removes secret fields (such as api_key) before persisting settings. Keep keys in environment variables.
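As a rough sketch (not Annolid's actual implementation), the precedence and secret-stripping behavior can be pictured like this; the SECRET_KEYS set and function names are illustrative assumptions:
import json
import os
from pathlib import Path

SETTINGS_PATH = Path.home() / ".annolid" / "llm_settings.json"
SECRET_KEYS = {"api_key"}  # assumption: fields treated as secrets

def load_settings() -> dict:
    """Read ~/.annolid/llm_settings.json, returning {} if it does not exist."""
    if SETTINGS_PATH.exists():
        return json.loads(SETTINGS_PATH.read_text())
    return {}

def resolve_ollama_host() -> str:
    """Environment variable wins, then the settings file, then the default."""
    settings = load_settings()
    return (
        os.environ.get("OLLAMA_HOST")
        or settings.get("ollama", {}).get("host")
        or "http://localhost:11434"
    )

def strip_secrets(settings: dict) -> dict:
    """Drop secret-like fields before writing settings back to disk."""
    return {
        k: strip_secrets(v) if isinstance(v, dict) else v
        for k, v in settings.items()
        if k not in SECRET_KEYS
    }

print("Resolved OLLAMA_HOST:", resolve_ollama_host())
print("Persisted shape:", strip_secrets({"openai": {"api_key": "sk-...", "base_url": "https://api.openai.com/v1"}}))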
2. Core environment variables
Set only what you need:
OLLAMA_HOST for local Ollama (default: http://localhost:11434)
OPENAI_API_KEY for OpenAI-compatible providers
OPENAI_BASE_URL for OpenAI-compatible gateways (for example OpenRouter)
GEMINI_API_KEY (or GOOGLE_API_KEY) for Gemini
BRAVE_API_KEY for web_search tool calls
macOS/Linux example:
export OLLAMA_HOST="http://127.0.0.1:11434"
export OPENAI_API_KEY="..."
export OPENAI_BASE_URL="https://api.openai.com/v1"
export GEMINI_API_KEY="..."
export BRAVE_API_KEY="..."
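To confirm which of these variables the current shell actually exports, a small Python check (variable names taken from the list above):
import os

# Report which Annolid-related variables are visible to the current process.
for name in ("OLLAMA_HOST", "OPENAI_API_KEY", "OPENAI_BASE_URL",
             "GEMINI_API_KEY", "GOOGLE_API_KEY", "BRAVE_API_KEY"):
    status = "set" if os.environ.get(name) else "missing"
    print(f"{name:16s} {status}")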
3. Configure local models (Ollama)
Start Ollama and pull model(s), for example:
ollama pull qwen3-vl
ollama pull llama3.2-vision:latest
In Annolid Bot, choose provider Ollama (local).
Pick an available model from the model selector.
Optional ~/.annolid/llm_settings.json block:
{
"provider": "ollama",
"ollama": {
"host": "http://127.0.0.1:11434",
"preferred_models": ["qwen3-vl", "llama3.2-vision:latest"]
}
}
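To check that the ids under preferred_models have actually been pulled, you can query Ollama's standard /api/tags endpoint (an Ollama API detail, not an Annolid feature):
import json
import os
import urllib.request

# List models the local Ollama server has pulled, so the ids used in
# Annolid's model selector actually exist.
host = os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")
with urllib.request.urlopen(f"{host}/api/tags") as resp:
    models = json.load(resp).get("models", [])

for m in models:
    print(m.get("name"))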
4. Configure OpenAI models
Set OPENAI_API_KEY.
Keep OPENAI_BASE_URL=https://api.openai.com/v1 (or unset it).
In the Annolid Bot GUI, choose provider OpenAI GPT and choose a model.
Optional settings block:
{
"openai": {
"base_url": "https://api.openai.com/v1",
"preferred_models": ["gpt-4o-mini", "gpt-4o", "gpt-4.1-mini"]
}
}
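A minimal smoke test of the OpenAI-compatible endpoint, assuming the standard /chat/completions route and a model id your account can access:
import json
import os
import urllib.request

base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
api_key = os.environ["OPENAI_API_KEY"]

# One small chat completion round trip to confirm key, base URL, and model id.
payload = {
    "model": "gpt-4o-mini",  # any model id your provider accepts
    "messages": [{"role": "user", "content": "Reply with the word: ok"}],
}
req = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {api_key}",
             "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])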
5. Configure OpenRouter (OpenAI-compatible path)
Use OpenRouter through the OpenAI-compatible channel:
export OPENAI_API_KEY="sk-or-..."
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
GUI path:
Open Annolid Bot.
Click Configure… in the chat panel.
In the OpenRouter tab, set:
API key to your sk-or-... token
Base URL to https://openrouter.ai/api/v1
Then select provider OpenRouter in the GUI and enter an OpenRouter model id (for example openai/gpt-4o-mini or another OpenRouter-supported id).
Notes:
This OpenAI-compatible path works reliably with Annolid's current provider resolution.
For agent loop usage, keep model ids exactly as your gateway expects.
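To double-check that a model id is listed by OpenRouter before typing it into Annolid Bot, you can query the gateway's /models endpoint (an OpenRouter API detail, not an Annolid feature; the example id is just a placeholder):
import json
import os
import urllib.request

base_url = os.environ.get("OPENAI_BASE_URL", "https://openrouter.ai/api/v1")
wanted = "openai/gpt-4o-mini"  # the id you plan to type into Annolid Bot

# OpenRouter lists available model ids under "data".
req = urllib.request.Request(
    f"{base_url}/models",
    headers={"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}"},
)
with urllib.request.urlopen(req) as resp:
    ids = {m["id"] for m in json.load(resp).get("data", [])}

print(wanted, "is", "available" if wanted in ids else "not listed")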
6. Configure Gemini
Set one of:
GEMINI_API_KEY
GOOGLE_API_KEY
In the GUI, choose provider Google Gemini.
Optional settings block:
{
"gemini": {
"preferred_models": ["gemini-1.5-flash", "gemini-1.5-pro"]
}
}
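A minimal Gemini smoke test over Google's public REST API (the generateContent route is a Google AI API detail, not Annolid behavior; swap in any model from your preferred_models list):
import json
import os
import urllib.request

api_key = os.environ.get("GEMINI_API_KEY") or os.environ["GOOGLE_API_KEY"]
model = "gemini-1.5-flash"

url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{model}:generateContent?key={api_key}"
)
payload = {"contents": [{"parts": [{"text": "Reply with the word: ok"}]}]}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["candidates"][0]["content"]["parts"][0]["text"])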
7. Configure agent profiles and model routing
Use profiles in ~/.annolid/llm_settings.json to assign different providers/models per agent role:
{
"profiles": {
"playground": {"provider": "openai", "model": "gpt-4o-mini"},
"caption": {"provider": "ollama", "model": "qwen3-vl"},
"research_agent": {"provider": "openai", "model": "gpt-4.1-mini"},
"polygon_agent": {"provider": "gemini", "model": "gemini-1.5-pro"}
},
"agent": {
"temperature": 0.7,
"max_tool_iterations": 12,
"max_history_messages": 24,
"memory_window": 50
}
}
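A hypothetical sketch of how role-based routing can be read out of this file; resolve_profile and its fallback behavior are illustrative, not Annolid's internal API:
import json
from pathlib import Path

SETTINGS_PATH = Path.home() / ".annolid" / "llm_settings.json"

def resolve_profile(role: str) -> tuple[str, str]:
    """Return (provider, model) for an agent role, falling back to the top-level provider."""
    settings = json.loads(SETTINGS_PATH.read_text())
    entry = settings.get("profiles", {}).get(role, {})
    provider = entry.get("provider", settings.get("provider", "ollama"))
    model = entry.get("model", "")
    return provider, model

for role in ("playground", "caption", "research_agent", "polygon_agent"):
    print(role, "->", resolve_profile(role))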
Memory behavior:
memory/MEMORY.md is injected as long-term facts.
memory/HISTORY.md is an append-only archive for recall/search (not auto-injected).
memory/YYYY-MM-DD.md stores daily notes; archived turns are flushed here before consolidation compacts session history.
Retrieval is plugin-based in the workspace memory store; the default plugin is lexical (workspace_lexical_v1), used by memory_search.
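The real workspace_lexical_v1 plugin is more involved, but the idea behind lexical recall over the archive can be sketched roughly like this (illustrative only):
from pathlib import Path

def lexical_search(archive: Path, query: str, top_k: int = 5) -> list[str]:
    """Score archive lines by simple keyword counts and return the best matches."""
    terms = query.lower().split()
    scored = []
    for line in archive.read_text().splitlines():
        score = sum(line.lower().count(t) for t in terms)
        if score:
            scored.append((score, line))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [line for _, line in scored[:top_k]]

print("\n".join(lexical_search(Path("memory/HISTORY.md"), "polygon tracking")))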
Model name flexibility:
In the Annolid Bot model selector, you can type any model id (not limited to the predefined list).
Typed model ids are remembered per provider and shown in future sessions.
8. Configure search key for tool calls
web_search tool calls require:
export BRAVE_API_KEY="..."
If unset, tool execution returns Error: BRAVE_API_KEY not configured.
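To verify the key outside Annolid, you can call the Brave Search API directly (the endpoint and X-Subscription-Token header follow Brave's public API, not an Annolid interface):
import json
import os
import urllib.parse
import urllib.request

api_key = os.environ["BRAVE_API_KEY"]
query = urllib.parse.quote("annolid behavior analysis")

req = urllib.request.Request(
    f"https://api.search.brave.com/res/v1/web/search?q={query}",
    headers={"X-Subscription-Token": api_key, "Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

# Print the first few hits to confirm the key works.
for item in results.get("web", {}).get("results", [])[:3]:
    print(item.get("title"), "-", item.get("url"))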
9. Validate your setup
Run:
annolid agent-security-check
To check/apply agent updates:
annolid-run agent-update --channel stable
annolid-run agent-update --channel stable --apply # dry-run plan
annolid-run agent-update --channel stable --apply --execute
annolid-run agent-update --channel stable --require-signature
The update pipeline uses staged phases:
preflight -> stage -> verify -> apply -> restart -> post_check.
If apply/post-check fails, a rollback plan is attached to the JSON report.
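For intuition only, staged execution with a rollback plan attached on failure can be outlined like this; it is a sketch of the concept, not the agent-update implementation:
PHASES = ["preflight", "stage", "verify", "apply", "restart", "post_check"]

def run_update(run_phase) -> dict:
    """Run phases in order; on apply/post_check failure, attach a rollback plan."""
    report = {"phases": [], "rollback_plan": None}
    for phase in PHASES:
        ok = run_phase(phase)
        report["phases"].append({"phase": phase, "ok": ok})
        if not ok:
            if phase in ("apply", "post_check"):
                report["rollback_plan"] = {"restore_from": "staged copy", "failed_phase": phase}
            break
    return report

# Simulate a run where post_check fails and a rollback plan is attached.
print(run_update(lambda phase: phase != "post_check"))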
To run evaluation/replay with regression gating:
annolid-run agent-eval \
--traces ./eval/traces.jsonl \
--baseline-responses ./eval/baseline.jsonl \
--candidate-responses ./eval/candidate.jsonl \
--out ./eval/report.json \
--max-regressions 0
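For intuition, regression gating amounts to counting cases that passed in the baseline but fail in the candidate and comparing that count against --max-regressions. A rough sketch; the JSONL field names (id, passed) are assumptions, not the real agent-eval schema:
import json
import sys

def load(path):
    """Map case id -> pass/fail from a JSONL file (assumed fields: id, passed)."""
    with open(path) as f:
        return {row["id"]: row["passed"] for row in map(json.loads, f)}

def gate(baseline_path, candidate_path, max_regressions=0) -> int:
    baseline = load(baseline_path)
    candidate = load(candidate_path)
    regressions = [k for k, ok in baseline.items() if ok and not candidate.get(k, False)]
    print(f"regressions: {len(regressions)} (allowed: {max_regressions})")
    return 0 if len(regressions) <= max_regressions else 1

sys.exit(gate("./eval/baseline.jsonl", "./eval/candidate.jsonl"))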
The agent-security-check command reports:
key/config file permission posture,
whether secret-like fields were persisted,
provider key presence for OpenAI/Gemini.
Exit codes:
0: OK
1: Warning
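If you script the check (for example in CI), the documented exit codes make it easy to fail a job on warnings; a minimal wrapper:
import subprocess
import sys

# Exit code 0 means OK; 1 means the security check found something to fix.
result = subprocess.run(["annolid", "agent-security-check"])
if result.returncode != 0:
    print("agent-security-check reported a warning; review key/config permissions.")
sys.exit(result.returncode)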
10. Recommended secure workflow
Keep secrets only in env vars.
Keep llm_settings.json for model/provider preferences.
Use agent-security-check after changes.
Rotate keys immediately if exposed.